
    Empathic Responses to Affective Film Clips Following Brain Injury and the Association with Emotion Recognition Accuracy

    Objective: To compare empathic responses to affective film clips in participants with traumatic brain injury (TBI) and healthy controls (HCs), and to examine associations with affect recognition. Design: Cross-sectional study using a quasi-experimental design. Setting: Multi-site study conducted at a post-acute rehabilitation facility in the USA and a university in Canada. Participants: A convenience sample of 60 adults with moderate to severe TBI and 60 HCs, frequency-matched for age and sex. Average time post-injury was 14 years (range: 0.5-37). Main Outcome Measures: Participants were shown affective film clips and asked to report how the main character in the clip felt and how they personally felt in response to the clip. Empathic responses were operationalized as participants feeling the same emotion they identified the character to be feeling. Results: Participants with TBI had lower emotion recognition scores (p=.007) and fewer empathic responses than HCs (67% vs. 79%; p<.001). Participants with TBI accurately identified and empathically responded to characters’ emotions less frequently (65%) than HCs (78%). Participants with TBI had poorer recognition scores and fewer empathic responses to sad and fearful clips compared with HCs. Affect recognition was associated with empathic responses in both groups (p<.001). When participants with TBI accurately recognized characters’ emotions, they had an empathic response 71% of the time, more than double their rate of empathic responses for incorrectly identified emotions. Conclusions: Participants with TBI were less likely to recognize and respond empathically to others’ expressions of sadness and fear, which has implications for interpersonal interactions and relationships. This is the first study in the TBI population to demonstrate a direct association between an affect stimulus and an empathic response.

    Sex Differences in Emotion Recognition and Emotional Inferencing Following Severe Traumatic Brain Injury

    The primary objective of the current study was to determine whether men and women with traumatic brain injury (TBI) differ in their emotion recognition and emotional inferencing abilities. In addition to overall accuracy, we explored whether differences were contingent upon the target emotion for each task, or upon high- and low-intensity facial and vocal emotion expressions. A total of 160 participants (116 men) with severe TBI completed three tasks: facial emotion recognition (DANVA-Faces), vocal emotion recognition (DANVA-Voices), and emotional inferencing (Emotional Inference from Stories Test; EIST). Results showed that women with TBI were significantly more accurate in their recognition of vocal emotion expressions and in emotional inferencing. Further analyses of task performance showed that women were significantly better than men at recognising fearful facial expressions and high-intensity facial emotion expressions. Women also displayed increased response accuracy for sad vocal expressions and low-intensity vocal emotion expressions. Analysis of the EIST task showed that women were more accurate than men at emotional inferencing in sad and fearful stories. A similar proportion of women and men with TBI were impaired (≥2 SDs relative to normative means) at facial emotion perception, χ² = 1.45, p = 0.228, but a larger proportion of men was impaired at vocal emotion recognition, χ² = 7.13, p = 0.008, and emotional inferencing, χ² = 7.51, p = 0.006.

    Sex Differences in Emotional Insight After Traumatic Brain Injury

    Objective: To compare sex differences in alexithymia (poor emotional processing) between men and women with traumatic brain injury (TBI) and uninjured controls. Design: Cross-sectional study. Setting: TBI rehabilitation facility in the United States and a university in Canada. Participants: Sixty adults with moderate to severe TBI (62% men) and 60 uninjured controls (63% men) (N=120). Interventions: Not applicable. Main Outcome Measures: Toronto Alexithymia Scale-20 (TAS-20). Results: Uninjured men had significantly higher (worse) alexithymia scores than uninjured women on the TAS-20 (P=.007), whereas no sex differences were found in the TBI group (P=.698). Men and women with TBI had significantly higher alexithymia scores compared with uninjured same-sex controls (both P<.001). The prevalence of scores exceeding sex-based alexithymia norms was 37.8% for men and 47.8% for women with TBI, compared with 7.9% and 0%, respectively, for men and women without TBI. Conclusions: Contrary to most findings in the general population, men with TBI were not more alexithymic than their female counterparts with TBI. Both men and women with TBI have more severe alexithymia than their uninjured same-sex peers, and both are equally at risk for elevated alexithymia relative to the norms. Alexithymia should be evaluated and treated after TBI regardless of patient sex.

    Exploration of a new tool for assessing emotional inferencing after traumatic brain injury

    Objective: To explore the validity of an assessment tool under development, the Emotional Inferencing from Stories Test (EIST). This measure is being designed to assess the ability of people with traumatic brain injury (TBI) to make inferences about the emotional state of others solely from contextual cues. Methods and procedures: Study 1: 25 stories were presented to 40 healthy young adults. From these data, two versions of the EIST (EIST-1 and EIST-2) were created. Study 2: Each version was administered to a group of participants with moderate-to-severe TBI (EIST-1 group: 77 participants; EIST-2 group: 126 participants). Participants also completed a facial affect recognition test (DANVA2-AF). Participants with facial affect recognition impairment returned 2 weeks later and were re-administered both tests. Main outcomes: Participants with TBI scored significantly lower than the healthy group mean for EIST-1, F(1,114) = 68.49, p < 0.001, and EIST-2, F(1,163) = 177.39, p < 0.001. EIST scores in the EIST-2 group were significantly lower than in the EIST-1 group, t = 4.47, p < 0.001. DANVA2-AF scores significantly correlated with EIST scores, EIST-1: r = 0.50, p < 0.001; EIST-2: r = 0.31, p < 0.001. Test–retest reliability scores for the EIST were adequate. Conclusions: Both versions of the EIST were found to be sensitive to deficits in emotional inferencing. After further development, the EIST may provide clinicians with valuable information for intervention planning.

    A sensorimotor control framework for understanding emotional communication and regulation

    JHGW and CFH are supported by the Northwood Trust. TEVR was supported by a National Health and Medical Research Council (NHMRC) Early Career Fellowship (1088785). RP and MW were supported by the Australian Research Council (ARC) Centre of Excellence for Cognition and its Disorders (CE110001021).

    The role of audition in audiovisual perception of speech and emotion in children with hearing loss

    Integration of visual and auditory cues during perception provides us with redundant information that greatly facilitates processing. Hearing loss affects access to the acoustic cues essential to accurate perception of speech and emotion, potentially impacting audiovisual integration. This chapter explores the various factors that may impact auditory processing in persons with hearing loss, including communication environment, modality preferences, and the use of hearing aids versus cochlear implants. Audiovisual processing of both speech and emotion by children with hearing loss is also discussed. © Springer Science+Business Media New York 2013

    Recognition of high and low intensity facial and vocal expressions of emotion by children and adults

    The ability to accurately identify facial and vocal cues of emotion is important to the development of psychosocial well-being. However, the developmental trajectory and pattern of recognition for emotions expressed in the facial versus vocal modality remain unclear. The current study aimed to expand upon the literature in this area by examining differences in the identification of high- and low-intensity facial versus vocal emotion expressions by participants in four separate age groups. The Diagnostic Analysis of Nonverbal Accuracy Scale-Second Edition, a standardized test of emotion recognition that includes previously validated high- and low-intensity expressions in each modality, was administered to a total of 40 participants, 10 in each of four age groups (preschoolers, school-aged children, early adolescents, adults). Results showed that accuracy of recognition for both facial and vocal emotion expressions increased with age. Adult-like proficiency for facial emotion recognition was reached by school-aged children but did not occur for vocal affect recognition until early adolescence. Intensity differentially impacted the recognition of facial and vocal emotion expressions, with increased intensity leading to better recognition of facial, but not vocal, expressions. Happy was the emotion best recognized in facial emotion expressions and angry was best recognized in vocal emotion expressions, but patterns of recognition for the remaining emotions varied across the two modalities and across age groups. Overall, results indicate that recognition of vocal emotion expressions lags behind that of facial expressions, and that the intensity and emotion expressed differentially influence recognition across these two modalities.

    Examining vocal affect in natural versus acted expressions of emotion

    Vocal affect (VA) is one of the most significant characteristics by which human emotion is identified. During a typical face-to-face conversational exchange, a listener is able to use numerous sources of information to discern the intended meaning of the speaker, including facial, gestural, and vocal cues. However, some real-life situations, such as a telephone conversation or the presence of visual deficits, force one to rely on VA alone. This type of scenario was of particular interest to us. Thus, the aim of the current investigation was to examine a person’s ability to recognize emotion in a vocal message in the absence of facial and gestural cues.